CS 269: Machine Learning Theory, Lecture 16: SVMs and Kernels

Authors

  • Jennifer Wortman Vaughan
  • Jason Au
  • Ling Fang
  • Kwanho Lee
Abstract

We previously showed that the solution to the primal problem is equivalent to the solution to the dual problem when the following primal-dual equivalence conditions are satisfied. First, we need a convex objective function; in our case it is (1/2)||w||^2. Second, we need convex inequality constraints g_i, which here are g_i = 1 − y_i(w · x_i + b) for i = 1, ..., m. The last condition (Slater's condition) states that there exists some setting of the variables that makes every g_i strictly less than 0. Since these conditions are satisfied for the optimization problem we defined, we can solve the dual version of the problem instead of the primal, and the solutions will be equivalent.
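
The primal and dual problems referenced above can be written out explicitly. The following is the standard hard-margin SVM formulation, consistent with the objective (1/2)||w||^2 and constraints g_i stated in the abstract; the dual is obtained from the Lagrangian with multipliers α_i ≥ 0:

```latex
% Primal problem: minimize the norm of the weight vector
% subject to every point being classified with margin at least 1.
\min_{\vec{w},\, b} \; \frac{1}{2}\|\vec{w}\|^2
\quad \text{s.t.} \quad y_i(\vec{w} \cdot \vec{x}_i + b) \ge 1,
\quad i = 1, \dots, m

% Equivalently, the inequality constraints of the abstract:
% g_i(\vec{w}, b) = 1 - y_i(\vec{w} \cdot \vec{x}_i + b) \le 0.

% Dual problem: maximize over the Lagrange multipliers \alpha_i.
\max_{\vec{\alpha}} \; \sum_{i=1}^{m} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m}
    \alpha_i \alpha_j\, y_i y_j\, (\vec{x}_i \cdot \vec{x}_j)
\quad \text{s.t.} \quad \alpha_i \ge 0, \quad \sum_{i=1}^{m} \alpha_i y_i = 0
```

The third condition is strict feasibility: there exists some (w, b) with y_i(w · x_i + b) > 1 for all i, which holds whenever the data are linearly separable.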


Similar resources

CS 269: Machine Learning Theory, Lecture 14: Generalization Error of Adaboost

In this lecture we will continue our discussion of the Adaboost algorithm and derive a bound on the generalization error. We saw last time that the training error decreases exponentially with respect to the number of rounds T . However, we also want to see the performance of this algorithm on new test data. Today we will show why the Adaboost algorithm generalizes so well and why it avoids over...


Cs229 Lecture Notes Support Vector Machines

This set of notes presents the Support Vector Machine (SVM) learning algorithm. SVMs are among the best (and many believe are indeed the best) “off-the-shelf” supervised learning algorithms. To tell the SVM story, we’ll need to first talk about margins and the idea of separating data with a large “gap.” Next, we’ll talk about the optimal margin classifier, which will lead us into a digression on...


CS 269: Machine Learning Theory, Lecture 4: Infinite Function Classes

Before stating Hoeffding’s Inequality, we recall two intermediate results that we will use in order to prove it. One is Markov’s Inequality and the other is Hoeffding’s Lemma. (Note that in class we did not cover Hoeffding’s Lemma, and only gave a brief outline of the Chernoff Bounding Techniques and how they are used to prove Hoeffding’s Inequality. Here we give a full proof of Hoeffding’s Inequal...


Data-dependent kernels in SVM classification of speech patterns

Support Vector Machines (SVMs) have recently proved to be powerful pattern classification tools with a strong connection to statistical learning theory. One of the hurdles to using SVMs in speech recognition, and a crucial aspect of SVM design in general, is the choice of the kernel function for non-separable data, and the setting of its parameters. This is often based on experience or a potenti...


DIPLOMARBEIT Support Vector Machines for Regression Estimation and their Application to Chaotic Time Series Prediction

Support vector machines (SVMs) are a quite recent supervised learning approach towards function estimation. They combine several results from statistical learning theory, optimisation theory, and machine learning, and employ kernels as one of their most important ingredients. The present work covers the theory of SVMs with emphasis on SVMs for regression estimation, and the problem of chaotic t...




Publication date: 2010